When de-registering a workspace cluster, mark any leftover running instances as stopped/failed #12912
Conversation
Force-pushed `91e0678` to `742693f`
Hmm, in "On status update" (view code):
- gitpod/components/ws-manager-bridge/src/bridge.ts, lines 374 to 383 in a1df2b9
- gitpod/components/ws-manager-bridge/src/bridge.ts, lines 621 to 627 in a1df2b9

In "irregular cases" (view code):
- gitpod/components/ws-manager-bridge/src/bridge.ts, lines 602 to 611 in a1df2b9

This is called in the following cases:
- gitpod/components/ws-manager-bridge/src/bridge.ts, lines 518 to 523 in a1df2b9
- gitpod/components/ws-manager-bridge/src/bridge.ts, lines 583 to 593 in a1df2b9

Differences

Are these differences intended? Do they make sense? 🤔

EDIT: Assuming no, and resolving the differences in this drive-by commit: b2aecd0
Force-pushed `742693f` to `5707479`

Force-pushed `5707479` to `01262e8`
re your comment ("We should set …"): @geropl's comment here seems relevant for this change.
@@ -120,6 +120,7 @@ export class WorkspaceManagerBridge implements Disposable {
     }
 
     public stop() {
+        this.markAllRunningWorkspaceInstancesAsStopped();
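For context, a minimal sketch of what a `markAllRunningWorkspaceInstancesAsStopped` method might do, assuming a simplified instance shape and an in-memory list instead of the real Gitpod DB (both are assumptions for illustration, not the actual bridge code):

```typescript
// Hypothetical sketch: mark every not-yet-stopped instance of one cluster as stopped.
// The WorkspaceInstance shape here is an assumption; the real bridge persists the
// status change to the database and emits an instance update.

interface WorkspaceInstance {
    id: string;
    clusterName: string;
    phase: "running" | "stopping" | "stopped";
    stoppedTime?: string;
}

function markAllRunningInstancesAsStopped(
    instances: WorkspaceInstance[],
    clusterName: string,
    now: () => Date = () => new Date(),
): WorkspaceInstance[] {
    return instances.map((i) => {
        // Leave other clusters' instances and already-stopped instances untouched.
        if (i.clusterName !== clusterName || i.phase === "stopped") {
            return i;
        }
        // Record a final "stopped" state so the instance doesn't linger as running.
        return { ...i, phase: "stopped" as const, stoppedTime: now().toISOString() };
    });
}
```

The key property is that the operation only touches instances of the de-registered cluster, and is idempotent for instances that are already stopped.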
I'm not sure I like this approach.
So far, stopping a bridge was an operation that had absolutely no effect on workspaces. This made it a very cheap operation, and allowed for great operational flexibility. E.g., if you had to fix up a DB entry, you could always remove/re-add an entry, with very limited downsides (a delay of workspace updates for a handful of seconds). Or, if you wanted to force a reconnect, you could remove and re-add a DB entry. Now, it has a very destructive side-effect.
💭 I wonder what happens if we stop a `ws-manager-bridge` pod during a rollout. It would stop all workspaces on that cluster, no? @jankeromnes
What do you think of having a periodic clean-up where we check and stop instances for which no ws-manager exists?
Or, a more general version: for all currently not-stopped instances, check them against ws-manager. This would catch a broader set of problems, and we already need to solve this problem anyway.
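The periodic clean-up proposed here could be sketched roughly as follows. The names, the `InstanceRow` shape, and the scheduling are all assumptions for illustration; the real implementation would query the DB and the set of registered ws-manager clusters:

```typescript
// Hypothetical sketch of the periodic clean-up: on a schedule, find instances
// whose cluster is no longer registered, so they can be marked stopped/failed.

interface InstanceRow {
    id: string;
    region: string; // name of the cluster this instance runs on
    phase: string;
}

function findOrphanedInstances(
    registeredClusters: Set<string>,
    instances: InstanceRow[],
): InstanceRow[] {
    // An instance is orphaned if it is not stopped, yet no registered
    // ws-manager cluster matches its region.
    return instances.filter(
        (i) => i.phase !== "stopped" && !registeredClusters.has(i.region),
    );
}

// A scheduler would then run this every few minutes, e.g.:
// setInterval(() => findOrphanedInstances(...).forEach(markStopped), 5 * 60_000);
```

This decouples the clean-up from any disconnect lifecycle event, which is the main appeal of the proposal: missed events no longer leak instances.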
> What do you think of having a periodic clean-up where we check and stop instances for which no ws-manager exists?

This is something we need anyway, but with different time constraints: this PR is about unblocking Team Workspace in specific cases. That belongs in a separate PR, but it's the same issue.

> For all currently not-stopped instances, check them against ws-manager.

That's a layer of abstraction above this PR/issue. tl;dr: we're already doing it; this is about the implementation details. Happy to provide context outside of this PR. 🙃
Many thanks for the great feedback here! 💯
> E.g., if you had to fix up a DB entry, you could always remove/re-add an entry, with very limited downsides (delay of workspace updates for a handful of seconds).

But why remove/re-add an entry when you can `gpctl clusters update`? 🤔 (E.g. to adjust the score.)
> Or, if you wanted to force a reconnect, you could remove and re-add a DB entry. Now, it has a very destructive side-effect.

Can't you kill the `ws-manager-bridge` pod, or `kubectl rollout restart` its deployment, to achieve this?
> 💭 I wonder what happens if we stop a `ws-manager-bridge` pod during a rollout. It would stop all workspaces on that cluster, no? @jankeromnes
I wasn't sure, so I tested it:
- I ran `kubectl delete pod ws-manager-bridge-[tab]` several times in a row
- I also ran `kubectl rollout restart deployment ws-manager-bridge` a few times

My running workspace stayed alive and well the whole time.
Only when I ran `gpctl clusters deregister --name temp --force` did it get marked as stopped.
Sync feedback from @geropl: Marking all instances as stopped in this `bridge.stop()` lifecycle method is a change in behavior -- does it really make sense?
On the one hand (i.e. "my" view), `bridge.stop()` is only called:
1. here, when a cluster was de-registered (either via RPC with `--force`, or manually deleted from the DB) and reconcile is called. In this case, the cluster is no longer connected, and we'll never hear again about any instances that were still running on it (so we can mark them as stopped/failed)
2. here, when we're disposing of the bridge controller (is this actually ever called? It doesn't look like it, so maybe we could delete this dead / misleading / somewhat dangerous code)

So this means that `bridge.stop()` is only ever called when you're actually de-registering a cluster, and instead of leaving all the instances as they are in the DB without any further updates, maybe it makes more sense to mark them as stopped/failed.
On the other hand (i.e. @geropl's view), maybe we need to be clearer or more careful about the fact that calling `bridge.stop()` will also mark all of its instances as stopped -- i.e. this doesn't currently seem to be called when you try to reconnect/restart a bridge, but we need to make sure that it also won't be called in the future under the assumption that instances will be kept running. Maybe this can be achieved with a comment, or by renaming the `stop` function. We could also make a bridge mark all its instances as stopped only when we receive a `deregister --force` RPC, but this seems a bit more complicated (we'd need to extract/share the stopping code, or somehow give the RPC handler access to the bridge), and it wouldn't handle the cases where a cluster is manually dropped from the DB.
My personal conclusion here would be to leave the PR as is, and simply add very explicit comments to stress the fact that calling `bridge.stop()` also marks all its instances as stopped in the DB. Or, we could even rename the method to `bridge.tearDown()` or `bridge.markAllInstancesAsStoppedAndDispose()` or similar. 😊
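The rename suggestion could look roughly like this. The class and field names are illustrative only (a sketch, not the actual `WorkspaceManagerBridge` implementation); the point is that the destructive side-effect becomes impossible to miss at the call site:

```typescript
// Hypothetical sketch of the rename idea: make the destructive side-effect
// explicit in the method name and doc comment, so a caller cannot mistake it
// for a cheap disconnect. State is kept in-memory here for illustration.

class WorkspaceManagerBridgeSketch {
    public disposed = false;
    public stoppedInstanceIds: string[] = [];
    public runningInstanceIds: string[];

    constructor(runningInstanceIds: string[]) {
        this.runningInstanceIds = runningInstanceIds;
    }

    /**
     * Tears down this bridge AND marks every instance of its cluster as
     * stopped in the DB. Only call this when the cluster is being
     * de-registered for good -- it is NOT a plain disconnect/reconnect.
     */
    public markAllInstancesAsStoppedAndDispose(): void {
        this.stoppedInstanceIds.push(...this.runningInstanceIds);
        this.runningInstanceIds = [];
        this.disposed = true;
    }
}
```

Compared with a comment on `stop()`, the rename also survives refactorings where doc comments tend to get lost.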
What you describe makes sense to me; it just seems that the lifecycle you are attaching the stop-all-instances effect to is not well understood (I agree we should understand it better, and remove code that is not called in practice). Attaching the clean-up to the disconnect lifecycle also requires that it is always called, so that we don't leak/forget about workspaces.
It seems more general and less complicated to clean up instances for which we don't know a manager on a regular schedule, instead of on disconnect events.
@jankeromnes Given the misalignment here (which is also due to other recent PRs improving this area), I think it makes sense to ship the fixes to …
Thanks for the discussions! Moving this back to Draft, and splitting out the drive-by fix into #13074 so that it can be merged earlier.
Force-pushed `01262e8` to `510c8f2` ("… leftover running instances as stopped/failed")
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Description
When de-registering a workspace cluster, mark any leftover running instances as stopped/failed in `ws-manager-bridge`.

Drive-by: Delete instance tokens and track workspace stop also in irregular cases.
Related Issue(s)
Fixes #6770
How to test
From #6682 (comment):
E.g. by running:
Release Notes
Documentation
Werft options: